
Conversation

@Pierre-vh
Contributor

@Pierre-vh Pierre-vh commented Oct 2, 2025

They were previously optimized to not emit any waitcnt, which is technically correct because there is no reordering of operations at workgroup scope in CU mode for GFX10+.

This breaks transitivity, however. For example, if we have the following sequence of events in one thread:

  • some stores
  • store atomic release syncscope("workgroup")
  • barrier

then another thread follows with

  • barrier
  • load atomic acquire
  • store atomic release syncscope("agent")

This does not work: while the other thread sees the stores, it cannot release them at the wider scope, because our release fences aren't strong enough to "wait" on stores from other waves.
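
For illustration, here is a minimal LLVM IR sketch of the two code paths (the function names, pointer arguments, and stored values are made up for the example; the two functions stand in for waves of the same workgroup):

```llvm
; Wave A: plain stores, then a workgroup-scope release store, then a barrier.
define amdgpu_kernel void @wave_a(ptr addrspace(1) %data, ptr addrspace(1) %flag) {
  store i32 42, ptr addrspace(1) %data, align 4                                        ; some stores
  store atomic i32 1, ptr addrspace(1) %flag syncscope("workgroup") release, align 4   ; workgroup release
  call void @llvm.amdgcn.s.barrier()                                                   ; barrier
  ret void
}

; Wave B: barrier, workgroup-scope acquire load, then a release at agent scope.
define amdgpu_kernel void @wave_b(ptr addrspace(1) %flag, ptr addrspace(1) %out) {
  call void @llvm.amdgcn.s.barrier()                                                   ; barrier
  %f = load atomic i32, ptr addrspace(1) %flag syncscope("workgroup") acquire, align 4 ; workgroup acquire
  store atomic i32 %f, ptr addrspace(1) %out syncscope("agent") release, align 4       ; agent-scope release
  ret void
}

declare void @llvm.amdgcn.s.barrier()
```

The intent is that after wave B's agent-scope release, wave A's plain stores to %data must also be visible at agent scope; with the old CU-mode sequences that wait was missing.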

We also cannot strengthen our release fences any further to allow releasing other waves' stores, because only GFX12 can do that with `global_wb`; GFX10-11 do not have a writeback instruction.
It would also add yet another level of complexity to the code sequences, with both acquire and release having CU-mode-only alternatives.
Lastly, acq/rel are always used together: the price for synchronization has to be paid at either the acquire or the release. Strengthening the releases would just make the memory model more complex without helping performance.

So the choice here is to streamline the code sequences by making CU and WGP mode emit almost identical code for release (or stronger) atomic orderings (the vL0 invalidate is not needed in CU mode).

This also removes the vm_vsrc(0) wait before barriers. Now that the release fence in CU mode is strong enough, it is no longer needed.

Supersedes #160501
Solves SC1-6454

Contributor Author

Pierre-vh commented Oct 2, 2025

@github-actions

github-actions bot commented Oct 2, 2025

⚠️ undef deprecator found issues in your code. ⚠️

You can test this locally with the following command:
git diff -U0 --pickaxe-regex -S '([^a-zA-Z0-9#_-]undef([^a-zA-Z0-9_-]|$)|UndefValue::get)' 'HEAD~1' HEAD llvm/lib/Target/AMDGPU/SIMemoryLegalizer.cpp llvm/test/CodeGen/AMDGPU/GlobalISel/memory-legalizer-atomic-fence.ll llvm/test/CodeGen/AMDGPU/lds-dma-workgroup-release.ll llvm/test/CodeGen/AMDGPU/memory-legalizer-barriers.ll llvm/test/CodeGen/AMDGPU/memory-legalizer-fence-mmra-global.ll llvm/test/CodeGen/AMDGPU/memory-legalizer-fence.ll llvm/test/CodeGen/AMDGPU/memory-legalizer-flat-volatile.ll llvm/test/CodeGen/AMDGPU/memory-legalizer-flat-workgroup.ll llvm/test/CodeGen/AMDGPU/memory-legalizer-global-volatile.ll llvm/test/CodeGen/AMDGPU/memory-legalizer-global-workgroup.ll llvm/test/CodeGen/AMDGPU/memory-legalizer-local-agent.ll llvm/test/CodeGen/AMDGPU/memory-legalizer-local-cluster.ll llvm/test/CodeGen/AMDGPU/memory-legalizer-local-system.ll llvm/test/CodeGen/AMDGPU/memory-legalizer-local-volatile.ll llvm/test/CodeGen/AMDGPU/memory-legalizer-local-workgroup.ll

The following files introduce new uses of undef:

  • llvm/test/CodeGen/AMDGPU/GlobalISel/memory-legalizer-atomic-fence.ll

Undef is now deprecated and should only be used in the rare cases where no replacement is possible. For example, a load of uninitialized memory yields undef. You should use poison values for placeholders instead.

In tests, avoid using undef and having tests that trigger undefined behavior. If you need an operand with some unimportant value, you can add a new argument to the function and use that instead.

For example, this is considered a bad practice:

define void @fn() {
  ...
  br i1 undef, ...
}

Please use the following instead:

define void @fn(i1 %cond) {
  ...
  br i1 %cond, ...
}

Please refer to the Undefined Behavior Manual for more information.

@Pierre-vh Pierre-vh marked this pull request as ready for review October 2, 2025 08:24
@llvmbot
Member

llvmbot commented Oct 2, 2025

@llvm/pr-subscribers-llvm-globalisel

@llvm/pr-subscribers-backend-amdgpu

Author: Pierre van Houtryve (Pierre-vh)

Changes

They were previously optimized to not emit any waitcnt, which is technically correct because there is no reordering of operations at workgroup scope in CU mode for GFX10+.

This breaks transitivity, however. For example, if we have the following sequence of events in one thread:

  • some stores
  • store atomic release syncscope("workgroup")
  • barrier

then another thread follows with

  • barrier
  • load atomic acquire
  • store atomic release syncscope("agent")

This does not work: while the other thread sees the stores, it cannot release them at the wider scope, because our release fences aren't strong enough to "wait" on stores from other waves.

We also cannot strengthen our release fences any further to allow releasing other waves' stores, because only GFX12 can do that with `global_wb`; GFX10-11 do not have a writeback instruction.
It would also add yet another level of complexity to the code sequences, with both acquire and release having CU-mode-only alternatives.
Lastly, acq/rel are always used together: the price for synchronization has to be paid at either the acquire or the release. Strengthening the releases would just make the memory model more complex without helping performance.

So the choice here is to streamline the code sequences by making CU and WGP mode emit identical code for release (or stronger) atomic ordering.

This also removes the vm_vsrc(0) wait before barriers. Now that the release fence in CU mode is strong enough, it is no longer needed.

Supersedes #160501
Solves SC1-6454


Patch is 428.16 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/161638.diff

16 Files Affected:

  • (modified) llvm/docs/AMDGPUUsage.rst (+3-58)
  • (modified) llvm/lib/Target/AMDGPU/SIMemoryLegalizer.cpp (+14-37)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/memory-legalizer-atomic-fence.ll (+24-6)
  • (modified) llvm/test/CodeGen/AMDGPU/lds-dma-workgroup-release.ll (-1)
  • (modified) llvm/test/CodeGen/AMDGPU/memory-legalizer-barriers.ll (+8-12)
  • (modified) llvm/test/CodeGen/AMDGPU/memory-legalizer-fence-mmra-global.ll (+48)
  • (modified) llvm/test/CodeGen/AMDGPU/memory-legalizer-fence.ll (+48-9)
  • (modified) llvm/test/CodeGen/AMDGPU/memory-legalizer-flat-volatile.ll (+8-3)
  • (modified) llvm/test/CodeGen/AMDGPU/memory-legalizer-flat-workgroup.ll (+601-185)
  • (modified) llvm/test/CodeGen/AMDGPU/memory-legalizer-global-volatile.ll (+8-3)
  • (modified) llvm/test/CodeGen/AMDGPU/memory-legalizer-global-workgroup.ll (+540-92)
  • (modified) llvm/test/CodeGen/AMDGPU/memory-legalizer-local-agent.ll (+240-90)
  • (modified) llvm/test/CodeGen/AMDGPU/memory-legalizer-local-cluster.ll (+240-90)
  • (modified) llvm/test/CodeGen/AMDGPU/memory-legalizer-local-system.ll (+240-90)
  • (modified) llvm/test/CodeGen/AMDGPU/memory-legalizer-local-volatile.ll (+8-3)
  • (modified) llvm/test/CodeGen/AMDGPU/memory-legalizer-local-workgroup.ll (+240-90)
diff --git a/llvm/docs/AMDGPUUsage.rst b/llvm/docs/AMDGPUUsage.rst
index 74b7604fda56d..cba86b3d5447e 100644
--- a/llvm/docs/AMDGPUUsage.rst
+++ b/llvm/docs/AMDGPUUsage.rst
@@ -13229,9 +13229,6 @@ table :ref:`amdgpu-amdhsa-memory-model-code-sequences-gfx10-gfx11-table`.
      store atomic release      - workgroup    - global   1. s_waitcnt lgkmcnt(0) &
                                               - generic     vmcnt(0) & vscnt(0)
 
-                                                           - If CU wavefront execution
-                                                             mode, omit vmcnt(0) and
-                                                             vscnt(0).
                                                            - If OpenCL, omit
                                                              lgkmcnt(0).
                                                            - Could be split into
@@ -13277,8 +13274,6 @@ table :ref:`amdgpu-amdhsa-memory-model-code-sequences-gfx10-gfx11-table`.
                                                          2. buffer/global/flat_store
      store atomic release      - workgroup    - local    1. s_waitcnt vmcnt(0) & vscnt(0)
 
-                                                           - If CU wavefront execution
-                                                             mode, omit.
                                                            - If OpenCL, omit.
                                                            - Could be split into
                                                              separate s_waitcnt
@@ -13366,9 +13361,6 @@ table :ref:`amdgpu-amdhsa-memory-model-code-sequences-gfx10-gfx11-table`.
      atomicrmw    release      - workgroup    - global   1. s_waitcnt lgkmcnt(0) &
                                               - generic     vmcnt(0) & vscnt(0)
 
-                                                           - If CU wavefront execution
-                                                             mode, omit vmcnt(0) and
-                                                             vscnt(0).
                                                            - If OpenCL, omit lgkmcnt(0).
                                                            - Could be split into
                                                              separate s_waitcnt
@@ -13413,8 +13405,6 @@ table :ref:`amdgpu-amdhsa-memory-model-code-sequences-gfx10-gfx11-table`.
                                                          2. buffer/global/flat_atomic
      atomicrmw    release      - workgroup    - local    1. s_waitcnt vmcnt(0) & vscnt(0)
 
-                                                           - If CU wavefront execution
-                                                             mode, omit.
                                                            - If OpenCL, omit.
                                                            - Could be split into
                                                              separate s_waitcnt
@@ -13498,9 +13488,6 @@ table :ref:`amdgpu-amdhsa-memory-model-code-sequences-gfx10-gfx11-table`.
      fence        release      - workgroup    *none*     1. s_waitcnt lgkmcnt(0) &
                                                             vmcnt(0) & vscnt(0)
 
-                                                           - If CU wavefront execution
-                                                             mode, omit vmcnt(0) and
-                                                             vscnt(0).
                                                            - If OpenCL and
                                                              address space is
                                                              not generic, omit
@@ -13627,9 +13614,6 @@ table :ref:`amdgpu-amdhsa-memory-model-code-sequences-gfx10-gfx11-table`.
      atomicrmw    acq_rel      - workgroup    - global   1. s_waitcnt lgkmcnt(0) &
                                                             vmcnt(0) & vscnt(0)
 
-                                                           - If CU wavefront execution
-                                                             mode, omit vmcnt(0) and
-                                                             vscnt(0).
                                                            - If OpenCL, omit
                                                              lgkmcnt(0).
                                                            - Must happen after
@@ -13681,8 +13665,6 @@ table :ref:`amdgpu-amdhsa-memory-model-code-sequences-gfx10-gfx11-table`.
                                                          2. buffer/global_atomic
                                                          3. s_waitcnt vm/vscnt(0)
 
-                                                           - If CU wavefront execution
-                                                             mode, omit.
                                                            - Use vmcnt(0) if atomic with
                                                              return and vscnt(0) if
                                                              atomic with no-return.
@@ -13707,8 +13689,6 @@ table :ref:`amdgpu-amdhsa-memory-model-code-sequences-gfx10-gfx11-table`.
 
      atomicrmw    acq_rel      - workgroup    - local    1. s_waitcnt vmcnt(0) & vscnt(0)
 
-                                                           - If CU wavefront execution
-                                                             mode, omit.
                                                            - If OpenCL, omit.
                                                            - Could be split into
                                                              separate s_waitcnt
@@ -13768,9 +13748,6 @@ table :ref:`amdgpu-amdhsa-memory-model-code-sequences-gfx10-gfx11-table`.
      atomicrmw    acq_rel      - workgroup    - generic  1. s_waitcnt lgkmcnt(0) &
                                                             vmcnt(0) & vscnt(0)
 
-                                                           - If CU wavefront execution
-                                                             mode, omit vmcnt(0) and
-                                                             vscnt(0).
                                                            - If OpenCL, omit lgkmcnt(0).
                                                            - Could be split into
                                                              separate s_waitcnt
@@ -13816,9 +13793,9 @@ table :ref:`amdgpu-amdhsa-memory-model-code-sequences-gfx10-gfx11-table`.
                                                          3. s_waitcnt lgkmcnt(0) &
                                                             vmcnt(0) & vscnt(0)
 
-                                                           - If CU wavefront execution
-                                                             mode, omit vmcnt(0) and
-                                                             vscnt(0).
+                                                           - If atomic with return, omit
+                                                             vscnt(0), if atomic with
+                                                             no-return, omit vmcnt(0).
                                                            - If OpenCL, omit lgkmcnt(0).
                                                            - Must happen before
                                                              the following
@@ -13991,9 +13968,6 @@ table :ref:`amdgpu-amdhsa-memory-model-code-sequences-gfx10-gfx11-table`.
      fence        acq_rel      - workgroup    *none*     1. s_waitcnt lgkmcnt(0) &
                                                             vmcnt(0) & vscnt(0)
 
-                                                           - If CU wavefront execution
-                                                             mode, omit vmcnt(0) and
-                                                             vscnt(0).
                                                            - If OpenCL and
                                                              address space is
                                                              not generic, omit
@@ -14223,9 +14197,6 @@ table :ref:`amdgpu-amdhsa-memory-model-code-sequences-gfx10-gfx11-table`.
      load atomic  seq_cst      - workgroup    - global   1. s_waitcnt lgkmcnt(0) &
                                               - generic     vmcnt(0) & vscnt(0)
 
-                                                           - If CU wavefront execution
-                                                             mode, omit vmcnt(0) and
-                                                             vscnt(0).
                                                            - Could be split into
                                                              separate s_waitcnt
                                                              vmcnt(0), s_waitcnt
@@ -14334,8 +14305,6 @@ table :ref:`amdgpu-amdhsa-memory-model-code-sequences-gfx10-gfx11-table`.
 
                                                          1. s_waitcnt vmcnt(0) & vscnt(0)
 
-                                                           - If CU wavefront execution
-                                                             mode, omit.
                                                            - Could be split into
                                                              separate s_waitcnt
                                                              vmcnt(0) and s_waitcnt
@@ -15337,8 +15306,6 @@ the instruction in the code sequence that references the table.
                                                             | ``s_wait_storecnt 0x0``
                                                             | ``s_wait_loadcnt 0x0``
                                                             | ``s_wait_dscnt 0x0``
-                                                            | **CU wavefront execution mode:**
-                                                            | ``s_wait_dscnt 0x0``
 
                                                            - If OpenCL, omit ``s_wait_dscnt 0x0``.
                                                            - The waits can be
@@ -15384,8 +15351,6 @@ the instruction in the code sequence that references the table.
                                                             | ``s_wait_storecnt 0x0``
                                                             | ``s_wait_loadcnt 0x0``
                                                             | ``s_wait_dscnt 0x0``
-                                                            | **CU wavefront execution mode:**
-                                                            | ``s_wait_dscnt 0x0``
 
                                                            - If OpenCL, omit.
                                                            - The waits can be
@@ -15479,8 +15444,6 @@ the instruction in the code sequence that references the table.
                                                             | ``s_wait_storecnt 0x0``
                                                             | ``s_wait_loadcnt 0x0``
                                                             | ``s_wait_dscnt 0x0``
-                                                            | **CU wavefront execution mode:**
-                                                            | ``s_wait_dscnt 0x0``
 
                                                            - If OpenCL, omit ``s_wait_dscnt 0x0``.
                                                            - If OpenCL and CU wavefront
@@ -15530,8 +15493,6 @@ the instruction in the code sequence that references the table.
                                                             | ``s_wait_storecnt 0x0``
                                                             | ``s_wait_loadcnt 0x0``
                                                             | ``s_wait_dscnt 0x0``
-                                                            | **CU wavefront execution mode:**
-                                                            | ``s_wait_dscnt 0x0``
 
                                                            - If OpenCL, omit all.
                                                            - The waits can be
@@ -15623,8 +15584,6 @@ the instruction in the code sequence that references the table.
                                                             | ``s_wait_storecnt 0x0``
                                                             | ``s_wait_loadcnt 0x0``
                                                             | ``s_wait_dscnt 0x0``
-                                                            | **CU wavefront execution mode:**
-                                                            | ``s_wait_dscnt 0x0``
 
                                                            - If OpenCL, omit ``s_wait_dscnt 0x0``.
                                                            - If OpenCL and
@@ -15754,8 +15713,6 @@ the instruction in the code sequence that references the table.
                                                             | ``s_wait_storecnt 0x0``
                                                             | ``s_wait_loadcnt 0x0``
                                                             | ``s_wait_dscnt 0x0``
-                                                            | **CU wavefront execution mode:**
-                                                            | ``s_wait_dscnt 0x0``
 
                                                            - If OpenCL, omit ``s_wait_dscnt 0x0``.
                                                            - Must happen after
@@ -15812,8 +15769,6 @@ the instruction in the code sequence that references the table.
                                                             | **Atomic without return:**
                                                             | ``s_wait_storecnt 0x0``
 
-                                                           - If CU wavefront execution
-                                                             mode, omit.
                                                            - Must happen before
                                                              the following
                                                              ``global_inv``.
@@ -15838,8 +15793,6 @@ the instruction in the code sequence that references the table.
                                                             | ``s_wait_storecnt 0x0``
                                                             | ``s_wait_loadcnt 0x0``
                                                             | ``s_wait_dscnt 0x0``
-                                                            | **CU wavefront execution mode:**
-                                                            | ``s_wait_dscnt 0x0``
 
                                                            - If OpenCL, omit.
                                                            - The waits can be
@@ -15901,8 +15854,6 @@ the instruction in the code sequence that references the table.
                                                             | ``s_wait_storecnt 0x0``
                                                             | ``s_wait_loadcnt 0x0``
                                                             | ``s_wait_dscnt 0x0``
-                                                            | **CU wavefront execution mode:**
-                                                            | ``s_wait_dscnt 0x0``
 
                                                            - If OpenCL, omit ``s_wait_loadcnt 0x0``.
                                                            - The waits can be
@@ -16154,8 +16105,6 @@ the instruction in the code sequence that references the table.
                                                             | ``s_wait_storecnt 0x0``
                                                             | ``s_wait_loadcnt 0x0``
                                                             | ``s_wait_dscnt 0x0``
-                                                            | **CU wavefront execution mode:**
-                                                            | ``s_wait_dscnt 0x0``
 
                                                            - If OpenCL and
                                                              address space is
@@ -16384,8 +16333,6 @@ the instruction in the code sequence that references the table.
                                                             | ``s_wait_storecnt 0x0``
                                                             | ``s_wait_loadcnt 0x0``
                                                             | ``s_wait_dscnt 0x0``
-                                                            | **CU wavefront execution mode:**
-                                                            | ``s_wait_dscnt 0x0``
 
                                                            - If OpenCL, omit
                                                              ``s_wait_dscnt 0x0``
@@ -16492,8 +16439,6 @@ the instruction in the code sequence that references the table.
                                                             | ``s_wait_storecnt 0x0``
                                                             | ``s_wait_loadcnt 0x0``
                                                             | ``s_wait_dscnt 0x0``
-                                                            | **CU wavefront execution mode:**
-                                                            | ``s_wait_dscnt 0x0``
 
                                                            - If OpenCL, omit all.
                                                            - The waits can be
diff --git a/llvm/lib/Target/AMDGPU/SIMemoryLegalizer.cpp b/llvm/lib/Target/AMDGPU/SIMemoryLegalizer.cpp
index 362ef140a28f8..29c326893539b 100644
--- a/llvm/lib/Target/AMDGPU/SIMemoryLegalizer.cpp
+++ b/llvm/lib/Target/AMDGPU/SIMemoryLegalizer.cpp
@@ -390,12 +390,6 @@ class SICacheControl {
                              bool IsCrossAddrSpaceOrdering,
                              Position Pos) const = 0;
 
-  /// Inserts any necessary instructions before the barrier start instruction
-  /// \p MI in order to support pairing of barriers and fences.
-  virtual bool insertBarrierStart(MachineBasicBlock::iterator &MI) const {
-    return false;
-  };
-
   /// Virtual destructor to allow derivations to be deleted.
   virtual ~SICacheControl() = default;
 };
@@ -576,12 +570,8 @@ class SIGfx10CacheControl : public SIGfx7CacheControl {
                   bool IsCrossAddrSpaceOrdering, Position Pos,
                   AtomicOrdering Order, bool AtomicsOnly) const override;
 
-  bool insertAcquire(MachineBasicBlock::iterator &MI,
-                     SIAtomicScope Scope,
-                     SIAtomicAddrSpace AddrSpace,
-                     Position Pos) const override;
-
-  bool insertBarrierStart(MachineBasicBlock::iterator &MI) const override;
+  bool insertAcquire(MachineBasicBlock::iterator &MI, SIAtomicScope Scope,
+                     SIAtomicAddrSpace AddrSpace, Position Pos) const override;
 };
 
 class SIGfx11CacheControl : public SIGfx10CacheControl {
@@ -2046,8 +2036,11 @@ bool SIGfx10CacheControl::insertWait(MachineBasicBlock::iterator &MI,
       // the WGP. Therefore need to wait for operations to complete to ensure
       // they are visible to waves in the other CU as the L0 is per CU.
       // Otherwise in CU mode and all waves of a work-group are on the same CU
-      // which shares the same L0.
-      if (!ST.isCuModeEnabled()) {
+      // which shares the same L0. Note that we still need to wait when
+      // performing a release in this mode to respect the transitivity of
+      // happens-before, e.g. other waves of the workgroup must be able to
+      // release the memory from another wave at a wider scope.
+      if (!ST.isCuModeEnabled() || isReleaseOrStronger(Order)) {
         if ((Op & SIMemOp::LOAD) != SIMemOp::NONE)
           VMCnt |= true;
         if ((Op & SIMemOp::STORE) != SIMemOp::NONE)
@@ -2202,22 +2195,6 @@ bool SIGfx10CacheControl::insertAcquire(MachineBasicBlock::i...
[truncated]

Collaborator

@ssahasra ssahasra left a comment


LGTM, although I would have preferred modifying #160501 so that the whole discussion was immediately reachable from git blame without an extra hop of seeing which PR was superseded by this one.

Contributor Author

Pierre-vh commented Oct 20, 2025

Merge activity

  • Oct 20, 8:59 AM UTC: A user started a stack merge that includes this pull request via Graphite.
  • Oct 20, 10:05 AM – 7:12 PM UTC: Graphite rebased this pull request as part of a merge (63 times in total).

@Pierre-vh Pierre-vh force-pushed the users/pierre-vh/remove-extra-wait-after-rmw branch 5 times, most recently from acc43ac to 0a3ec0f on October 20, 2025 09:30
Base automatically changed from users/pierre-vh/remove-extra-wait-after-rmw to main October 20, 2025 10:03
@Pierre-vh Pierre-vh force-pushed the users/pierre-vh/fix-cumode-fence-gfx10 branch 10 times, most recently from 72c7723 to c10b131 on October 20, 2025 11:17
@Pierre-vh Pierre-vh force-pushed the users/pierre-vh/fix-cumode-fence-gfx10 branch 24 times, most recently from c434cb5 to 0d155a5 on October 20, 2025 18:36
@Pierre-vh Pierre-vh force-pushed the users/pierre-vh/fix-cumode-fence-gfx10 branch from 0d155a5 to aaa8d1b on October 20, 2025 19:11
@Pierre-vh
Contributor Author

Pierre-vh force-pushed the users/pierre-vh/fix-cumode-fence-gfx10 branch 15 times, most recently from 9989044 to d89770d

Is this normal graphite behaviour? It is sending out a lot of email notifications!

I think we might be seeing this: #151135

The workaround is to land each PR in a stack individually, one after the other.

My apologies for the inconvenience; I thought it had merged and logged off for the day. I cancelled the merge.
Next time I will merge manually, because this isn't the first time Graphite has done this.
Feel free to close my PRs if it happens again; I can always re-open them after.

@Pierre-vh Pierre-vh merged commit 07d47c7 into main Oct 21, 2025
9 of 11 checks passed
@Pierre-vh Pierre-vh deleted the users/pierre-vh/fix-cumode-fence-gfx10 branch October 21, 2025 07:23
ronlieb added a commit to ROCm/llvm-project that referenced this pull request Oct 21, 2025
* [flang] Fix standalone build regression from llvm#161179 (llvm#164309)

Fix incorrect linking and dependencies introduced in llvm#161179 that break
standalone builds of Flang.

Signed-off-by: Michał Górny <[email protected]>

* [AMDGPU] Remove magic constants from V_PK_ADD_F32 pattern. NFC (llvm#164335)

* [AMDGPU] Update code sequence for CU-mode Release Fences in GFX10+ (llvm#161638)

They were previously optimized to not emit any waitcnt, which is
technically correct because there is no reordering of operations at
workgroup scope in CU mode for GFX10+.

This breaks transitivity however, for example if we have the following
sequence of events in one thread:

- some stores
- store atomic release syncscope("workgroup")
- barrier

then another thread follows with

- barrier
- load atomic acquire
- store atomic release syncscope("agent")

It does not work because, while the other thread sees the stores, it
cannot release them at the wider scope. Our release fences aren't strong
enough to "wait" on stores from other waves.

We also cannot strengthen our release fences any further to allow for
releasing other wave's stores because only GFX12 can do that with
`global_wb`. GFX10-11 do not have the writeback instruction.
It'd also add yet another level of complexity to code sequences, with
both acquire/release having CU-mode only alternatives.
Lastly, acq/rel are always used together. The price for synchronization
has to be paid either at the acq, or the rel. Strengthening the releases
would just make the memory model more complex but wouldn't help
performance.

So the choice here is to streamline the code sequences by making CU and
WGP mode emit almost identical (vL0 inv is not needed in CU mode) code
for release (or stronger) atomic ordering.

This also removes the `vm_vsrc(0)` wait before barriers. Now that the
release fence in CU mode is strong enough, it is no longer needed.

Supersedes llvm#160501
Solves SC1-6454

* [InstSimplify] Support ptrtoaddr in simplifyGEPInst() (llvm#164262)

This adds support for ptrtoaddr in the `ptradd p, ptrtoaddr(p2) -
ptrtoaddr(p) -> p2` fold.

This fold requires that p and p2 have the same underlying object
(otherwise the provenance may not be the same).

The argument I would like to make here is that because the underlying
objects are the same (and the pointers in the same address space), the
non-address bits of the pointer must be the same. Looking at some
specific cases of underlying object relationship:

 * phi/select: Trivially true.
* getelementptr: Only modifies address bits, non-address bits must
remain the same.
* addrspacecast round-trip cast: Must preserve all bits because we
optimize such round-trip casts away.
* non-interposable global alias: I'm a bit unsure about this one, but I
guess the alias and the aliasee must have the same non-address bits?
* various intrinsics like launder.invariant.group, ptrmask. I think
these all either preserve all pointer bits (like the invariant.group
ones) or at least the non-address bits (like ptrmask). There are some
interesting cases like amdgcn.make.buffer.rsrc, but those are cross
address-space.

-----

There is a second `gep (gep p, C), (sub 0, ptrtoint(p)) -> C` transform
in this function, which I am not extending to handle ptrtoaddr, adding
negative tests instead. This transform is overall dubious for provenance
reasons, but especially dubious with ptrtoaddr, as then we don't have
the guarantee that provenance of `p` has been exposed.

* [Hexagon] Add REQUIRES: asserts to test

This test uses -debug-only, so needs an assertion-enabled build.

* [AArch64] Combing scalar_to_reg into DUP if the DUP already exists (llvm#160499)

If we already have a dup(x) as part of the DAG along with a
scalar_to_vec(x), we can re-use the result of the dup to the
scalar_to_vec(x).

* [CAS] OnDiskGraphDB - fix MSVC "not all control paths return a value" warnings. NFC. (llvm#164369)

* Reapply "[libc++] Optimize __hash_table::erase(iterator, iterator)" (llvm#162850)

This reapplication fixes the use after free caused by not properly
updating the bucket list in one case.

Original commit message:
Instead of just calling the single element `erase` on every element of
the range, we can combine some of the operations in a custom
implementation. Specifically, we don't need to search for the previous
node or re-link the list every iteration. Removing this unnecessary work
results in some nice performance improvements:
```
-----------------------------------------------------------------------------------------------------------------------
Benchmark                                                                                             old           new
-----------------------------------------------------------------------------------------------------------------------
std::unordered_set<int>::erase(iterator, iterator) (erase half the container)/0                    457 ns        459 ns
std::unordered_set<int>::erase(iterator, iterator) (erase half the container)/32                   995 ns        626 ns
std::unordered_set<int>::erase(iterator, iterator) (erase half the container)/1024               18196 ns       7995 ns
std::unordered_set<int>::erase(iterator, iterator) (erase half the container)/8192              124722 ns      70125 ns
std::unordered_set<std::string>::erase(iterator, iterator) (erase half the container)/0            456 ns        461 ns
std::unordered_set<std::string>::erase(iterator, iterator) (erase half the container)/32          1183 ns        769 ns
std::unordered_set<std::string>::erase(iterator, iterator) (erase half the container)/1024       27827 ns      18614 ns
std::unordered_set<std::string>::erase(iterator, iterator) (erase half the container)/8192      266681 ns     226107 ns
std::unordered_map<int, int>::erase(iterator, iterator) (erase half the container)/0               455 ns        462 ns
std::unordered_map<int, int>::erase(iterator, iterator) (erase half the container)/32              996 ns        659 ns
std::unordered_map<int, int>::erase(iterator, iterator) (erase half the container)/1024          15963 ns       8108 ns
std::unordered_map<int, int>::erase(iterator, iterator) (erase half the container)/8192         136493 ns      71848 ns
std::unordered_multiset<int>::erase(iterator, iterator) (erase half the container)/0               454 ns        455 ns
std::unordered_multiset<int>::erase(iterator, iterator) (erase half the container)/32              985 ns        703 ns
std::unordered_multiset<int>::erase(iterator, iterator) (erase half the container)/1024          16277 ns       9085 ns
std::unordered_multiset<int>::erase(iterator, iterator) (erase half the container)/8192         125736 ns      82710 ns
std::unordered_multimap<int, int>::erase(iterator, iterator) (erase half the container)/0          457 ns        454 ns
std::unordered_multimap<int, int>::erase(iterator, iterator) (erase half the container)/32        1091 ns        646 ns
std::unordered_multimap<int, int>::erase(iterator, iterator) (erase half the container)/1024     17784 ns       7664 ns
std::unordered_multimap<int, int>::erase(iterator, iterator) (erase half the container)/8192    127098 ns      72806 ns
```


This reverts commit acc3a62.

* [TableGen] List the indices of sub-operands (llvm#163723)

Some instances of the `Operand` class used in Tablegen instruction
definitions expand to a cluster of multiple operands at the MC layer,
such as complex addressing modes involving base + offset + shift, or
clusters of operands describing conditional Arm instructions or
predicated MVE instructions. There's currently no convenient way for C++
code to know the offset of one of those sub-operands from the start of
the cluster: instead it just hard-codes magic numbers like `index+2`,
which is hard to read and fragile.

This patch adds an extra piece of output to `InstrInfoEmitter` to define
those instruction offsets, based on the name of the `Operand` class
instance in Tablegen, and the names assigned to the sub-operands in the
`MIOperandInfo` field. For example, if target Foo were to define

  def Bar : Operand {
    let MIOperandInfo = (ops GPR:$first, i32imm:$second);
    // ...
  }

then the new constants would be `Foo::SUBOP_Bar_first` and
`Foo::SUBOP_Bar_second`, defined as 0 and 1 respectively.

As an example, I've converted some magic numbers related to the MVE
predication operand types (`vpred_n` and its superset `vpred_r`) to use
the new named constants in place of the integer literals they previously
used. This is more verbose, but also clearer, because it explains why
the integer is chosen instead of what its value is.

* [lldb] Add bidirectional packetLog to gdbclientutils.py (llvm#162176)

While debugging the tests for llvm#155000 I found it helpful to have both
sides
of the simulated gdb-rsp traffic rather than just the responses so I've
extended
the packetLog in MockGDBServerResponder to record traffic in both
directions.
Tests have been updated accordingly

* [MLIR] [Vector] Added canonicalizer for folding from_elements + transpose (llvm#161841)

## Description
Adds a new canonicalizer that folds
`vector.from_elements(vector.transpose))` => `vector.from_elements`.
This canonicalization reorders the input elements for
`vector.from_elements`, adjusts the output shape to match the effect of
the transpose op and eliminating its need.

## Testing
Added a 2D vector lit test that verifies the working of the rewrite.

---------

Signed-off-by: Keshav Vinayak Jha <[email protected]>

* [DA] Add initial support for monotonicity check (llvm#162280)

The dependence testing functions in DA assume that the analyzed AddRec
does not wrap over the entire iteration space. For AddRecs that may
wrap, DA should conservatively return unknown dependence. However, no
validation is currently performed to ensure that this condition holds,
which can lead to incorrect results in some cases.

This patch introduces the notion of *monotonicity* and a validation
logic to check whether a SCEV is monotonic. The monotonicity check
classifies the SCEV into one of the following categories:

- Unknown: Nothing is known about the monotonicity of the SCEV.
- Invariant: The SCEV is loop-invariant.
- MultivariateSignedMonotonic: The SCEV doesn't wrap in a signed sense
for any iteration of the loops in the loop nest.

The current validation logic basically searches an affine AddRec
recursively and checks whether the `nsw` flag is present. Notably, it is
still unclear whether we should also have a category for unsigned
monotonicity.
The monotonicity check is still under development and disabled by
default for now. Since such a check is necessary to make DA sound, it
should be enabled by default once the functionality is sufficient.

Split off from llvm#154527.

* [VPlan] Use VPlan::getRegion to shorten code (NFC) (llvm#164287)

* [VPlan] Improve code using m_APInt (NFC) (llvm#161683)

* [SystemZ] Avoid trunc(add(X,X)) patterns (llvm#164378)

Replace with trunc(add(X,Y)) to avoid premature folding in upcoming patch llvm#164227

* [clang][CodeGen] Emit `llvm.tbaa.errno` metadata during module creation

Let Clang emit `llvm.tbaa.errno` metadata in order to let LLVM
carry out optimizations around errno-writing libcalls to, as
long as it is proved the involved memory location does not alias
`errno`.

Previous discussion: https://discourse.llvm.org/t/rfc-modelling-errno-memory-effects/82972.

* [LV][NFC] Remove undef from phi incoming values (llvm#163762)

Split off from PR llvm#163525, this standalone patch replaces
 use of undef as incoming PHI values with zero, in order
 to reduce the likelihood of contributors hitting the
 `undef deprecator` warning in github.

* [DA] Add option to enable specific dependence test only (llvm#164245)

PR llvm#157084 added an option `da-run-siv-routines-only` to run only SIV
routines in DA. This PR replaces that option with a more fine-grained
one that allows to select other than SIV routines as well. This option
is useful for regression testing of individual DA routines. This patch
also reorganizes regression tests that use `da-run-siv-routines-only`.

* [libcxx] Optimize `std::generate_n` for segmented iterators (llvm#164266)

Part of llvm#102817.

This is a natural follow-up to llvm#163006. We are forwarding
`std::generate_n` to `std::__for_each_n` (`std::for_each_n` needs
c++17), resulting in improved performance for segmented iterators.

before:

```
std::generate_n(deque<int>)/32          17.5 ns         17.3 ns     40727273
std::generate_n(deque<int>)/50          25.7 ns         25.5 ns     26352941
std::generate_n(deque<int>)/1024         490 ns          487 ns      1445161
std::generate_n(deque<int>)/8192        3908 ns         3924 ns       179200
```

after:

```
std::generate_n(deque<int>)/32          11.1 ns         11.0 ns     64000000
std::generate_n(deque<int>)/50          16.1 ns         16.0 ns     44800000
std::generate_n(deque<int>)/1024         291 ns          292 ns      2357895
std::generate_n(deque<int>)/8192        2269 ns         2250 ns       298667
```

* [BOLT] Check entry point address is not in constant island (llvm#163418)

There are cases where `addEntryPointAtOffset` is called with a given
`Offset` that points to an address within a constant island. This
triggers `assert(!isInConstantIsland(EntryPointAddress)` and causes BOLT
to crash. This patch adds a check which ignores functions that would add
such entry points and warns the user.

* [llvm][dwarfdump] Pretty-print DW_AT_language_version (llvm#164222)

In both verbose and non-verbose mode we will now use the
`llvm::dwarf::LanguageDescription` to turn the version into a human
readable string. In verbose mode we also display the raw version code
(similar to how we display addresses in verbose mode). To make the
version code and prettified easier to distinguish, we print the
prettified name in colour (if available), which is consistent with how
`DW_AT_language` is printed in colour.

Before:
```
0x0000000c: DW_TAG_compile_unit                                                                           
              DW_AT_language_name       (DW_LNAME_C)                                                      
              DW_AT_language_version    (201112)             
```
After:
```
0x0000000c: DW_TAG_compile_unit                                                                           
              DW_AT_language_name       (DW_LNAME_C)                                                      
              DW_AT_language_version    (201112 C11)                                                             
```

---------

Signed-off-by: Michał Górny <[email protected]>
Signed-off-by: Keshav Vinayak Jha <[email protected]>
Co-authored-by: Michał Górny <[email protected]>
Co-authored-by: Stanislav Mekhanoshin <[email protected]>
Co-authored-by: Pierre van Houtryve <[email protected]>
Co-authored-by: Nikita Popov <[email protected]>
Co-authored-by: David Green <[email protected]>
Co-authored-by: Simon Pilgrim <[email protected]>
Co-authored-by: Nikolas Klauser <[email protected]>
Co-authored-by: Simon Tatham <[email protected]>
Co-authored-by: Daniel Sanders <[email protected]>
Co-authored-by: Keshav Vinayak Jha <[email protected]>
Co-authored-by: Ryotaro Kasuga <[email protected]>
Co-authored-by: Ramkumar Ramachandra <[email protected]>
Co-authored-by: Antonio Frighetto <[email protected]>
Co-authored-by: David Sherwood <[email protected]>
Co-authored-by: Connector Switch <[email protected]>
Co-authored-by: Asher Dobrescu <[email protected]>
Co-authored-by: Michael Buch <[email protected]>